Accuracy of Computed Eigenvectors via Optimizing a Rayleigh Quotient
Author
Abstract
This note gives converses to the well-known result: for any vector ũ such that sin θ(u, ũ) = O(ε), we have ũ∗Aũ/ũ∗ũ = λ + O(ε²), where λ is an eigenvalue and u is the corresponding eigenvector of a Hermitian matrix A, and "∗" denotes complex conjugate transpose. It shows that if ũ∗Aũ/ũ∗ũ is close to A's largest eigenvalue, then ũ is close to the corresponding eigenvector, with an error proportional to the square root of the error in ũ∗Aũ/ũ∗ũ as an approximation to the eigenvalue and inversely proportional to the square root of the gap between A's first two largest eigenvalues. We also give a subspace version of such a converse. Results of this kind may be of interest in applications such as eigenvector computation in Principal Component Analysis in image processing, where eigenvectors may be computed by optimizing Rayleigh quotients with the Conjugate Gradient method.

Let A be Hermitian with an eigenvalue λ and the corresponding eigenvector u. The following is well-known: for any vector ũ such that sin θ(u, ũ) = O(ε), we have

    ũ∗Aũ/ũ∗ũ = λ + O(ε²)    (1)

(see, e.g., [4, 5]), where "∗" denotes complex conjugate transpose. But I have not seen any converse to this statement appearing thus far. Such a converse is of interest in situations, e.g., eigenvector computations in Principal Component Analysis in image processing [3, 6], where eigenvectors may be computed by optimizing Rayleigh quotients with the Conjugate Gradient method. One may expect that if (1) holds then ũ would be close to an eigenvector u of A. But this may not be true without additional assumptions. Here are two examples:

Department of Mathematics, University of Kentucky, Lexington, KY 40506, email: [email protected]. This work was supported in part by the National Science Foundation (NSF) under Grant No. ACI-9721388 and an NSF/CAREER award under Grant No. CCR-9875201.
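Both directions can be checked numerically. The following is a minimal NumPy sketch, not from this note: the random Hermitian matrix, the perturbation size ε = 10⁻⁴, and the orthogonal perturbation direction are illustrative choices. It verifies (1) and the converse bound sin θ(u, ũ) ≤ √(err/gap), where err is the Rayleigh quotient error and gap the distance between the two largest eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Hermitian matrix A = (B + B*)/2.
n = 50
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2

# Eigenpairs in ascending order: u is the unit eigenvector for the
# largest eigenvalue lam1; gap = lam1 - lam2 is the spectral gap.
w, V = np.linalg.eigh(A)
lam1, lam2 = w[-1], w[-2]
u = V[:, -1]
gap = lam1 - lam2

# Perturb u by eps in a unit direction orthogonal to u, so sin(theta) ~ eps.
eps = 1e-4
p = rng.standard_normal(n) + 1j * rng.standard_normal(n)
p -= (u.conj() @ p) * u
p /= np.linalg.norm(p)
ut = u + eps * p

# Forward direction (1): the Rayleigh quotient error is O(eps**2);
# here it is bounded by eps**2 times the spread of the spectrum.
rho = (ut.conj() @ A @ ut).real / (ut.conj() @ ut).real
err = lam1 - rho
print(err <= eps**2 * (lam1 - w[0]))

# Converse direction: sin(theta) <= sqrt(err / gap), i.e. the eigenvector
# error grows like sqrt(err) and like 1/sqrt(gap).
cos_t = abs(u.conj() @ ut) / np.linalg.norm(ut)
sin_t = np.sqrt(max(0.0, 1.0 - cos_t**2))
print(sin_t <= np.sqrt(err / gap))
```

Both printed checks are exact inequalities here, since ũ∗Aũ/ũ∗ũ − λ₁ = −sin²θ (λ₁ − ρ(p)) with p ⊥ u and λ₂ ≥ ρ(p) ≥ λmin.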
Similar articles
Homotopy Method for the Eigenvalue Problem for Partial Differential Equations
Given a linear self-adjoint partial differential operator L, the smallest few eigenvalues and eigenfunctions of L are computed by the homotopy (continuation) method. The idea of the method is very simple. From some initial operator L0 with known eigenvalues and eigenfunctions, define the homotopy H(t) = (1 − t)L0 + tL, 0 ≤ t ≤ 1. If the eigenfunctions of H(t0) are known, then they are used to deter...
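The continuation idea in this excerpt can be illustrated on matrices; a hypothetical toy sketch follows, in which the operators, the step count, and the use of Rayleigh quotient iteration as the per-step corrector are all illustrative assumptions, not the paper's method. The smallest eigenpair of H(t) = (1 − t)L0 + tL is followed from t = 0 to t = 1, warm-starting each step with the previous eigenvector.

```python
import numpy as np

rng = np.random.default_rng(1)

# Initial operator L0 with known eigenpairs (diagonal) and a target
# symmetric operator L obtained by a small symmetric perturbation.
n = 6
L0 = np.diag(np.arange(1.0, n + 1.0))
E = rng.standard_normal((n, n))
L = L0 + 0.02 * (E + E.T)

def rqi(H, v, tol=1e-12, maxit=50):
    """Rayleigh quotient iteration warm-started at v; returns an
    eigenpair of H near the starting vector."""
    v = v / np.linalg.norm(v)
    for _ in range(maxit):
        mu = v @ H @ v
        try:
            w = np.linalg.solve(H - mu * np.eye(len(v)), v)
        except np.linalg.LinAlgError:
            break  # shift hit an eigenvalue exactly; v has converged
        v = w / np.linalg.norm(w)
        if np.linalg.norm(H @ v - (v @ H @ v) * v) < tol:
            break
    return v @ H @ v, v

# Follow the smallest eigenpair of H(t) = (1 - t) L0 + t L from the known
# pair at t = 0 (eigenvalue 1, eigenvector e_1) to t = 1.
lam, v = 1.0, np.eye(n)[:, 0]
for t in np.linspace(0.2, 1.0, 5):
    lam, v = rqi((1 - t) * L0 + t * L, v)

print(np.isclose(lam, np.linalg.eigvalsh(L)[0]))
```

The perturbation is kept small relative to the spectral gaps of L0 so that each warm start lies safely in the basin of the continued eigenpair.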
Rayleigh Quotient Based Optimization Methods For Eigenvalue Problems
Four classes of eigenvalue problems that admit similar min-max principles and the Cauchy interlacing inequalities as the symmetric eigenvalue problem famously does are investigated. These min-max principles pave ways for efficient numerical solutions for extreme eigenpairs by optimizing the so-called Rayleigh quotient functions. In fact, scientists and engineers have already been doing that for...
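Computing an extreme eigenpair by optimizing the Rayleigh quotient, as this excerpt describes, can be sketched with steepest ascent using an exact line search (realized as Rayleigh–Ritz on the span of the iterate and its gradient). This is a hypothetical minimal version; practical codes use conjugate-gradient or LOBPCG-type accelerations and preconditioning.

```python
import numpy as np

rng = np.random.default_rng(2)

# Symmetric test matrix.
n = 30
S = rng.standard_normal((n, n))
A = (S + S.T) / 2

def largest_eigpair(A, maxit=5000, tol=1e-10):
    """Maximize rho(x) = x.T A x / x.T x by steepest ascent with an exact
    line search: each step does Rayleigh-Ritz on span{x, gradient}.
    For generic starting vectors this converges to the largest eigenpair."""
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(maxit):
        r = A @ x - (x @ A @ x) * x       # gradient direction on the sphere
        if np.linalg.norm(r) < tol:
            break
        Q, _ = np.linalg.qr(np.column_stack([x, r]))
        T = Q.T @ A @ Q                   # 2x2 projected problem
        w, Y = np.linalg.eigh(T)
        x = Q @ Y[:, -1]                  # Ritz vector of the largest Ritz value
    return x @ A @ x, x

lam, x = largest_eigpair(A)
print(np.isclose(lam, np.linalg.eigvalsh(A)[-1]))
```

Note that x and r are always orthogonal, so the QR factorization is well conditioned, and the Rayleigh quotient increases monotonically because the new iterate maximizes it over a subspace containing the old one.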
Two-sided Grassmann-Rayleigh quotient iteration
The two-sided Rayleigh quotient iteration proposed by Ostrowski computes a pair of corresponding left-right eigenvectors of a matrix C. We propose a Grassmannian version of this iteration, i.e., its iterates are pairs of p-dimensional subspaces instead of one-dimensional subspaces in the classical case. The new iteration generically converges locally cubically to the pairs of left-right p-dimen...